
Adopting AI: Three Big Issues

Following on from the Robotics & AI Industry Showcase, Dr Caroline Chibelushi, Knowledge Transfer Manager for AI at KTN, explains the three big issues slowing the adoption of AI in the UK and what we can do to overcome them…

Back in May we all attended one of the industry’s biggest events, the Robotics & AI Industry Showcase.  There were some incredible projects and collaborations on display and it was very clear that the UK is at the forefront of robotics and AI innovation. 

However, just before the event, IBM published its Global AI Adoption Index which showed, once again, that the UK is actually far behind Europe in terms of the adoption of those AI innovations. 

Naturally, this issue was a hot topic among the people I spoke with at the Showcase. The main question that kept coming up was "what can we do about it?" But before we can answer that, I think we need to understand why UK plc is slow to adopt AI. In my view, there are three big issues.

First and foremost is the matter of risk and fear. The concept of AI is so broad, deep and new that it feels dangerous. Business decision-makers don't intuitively know who to turn to if AI fails, or who within the business should be responsible for it. And it certainly doesn't help that we regularly hear scare stories of systems being hacked and data being exposed, causing massive legal and reputational fallout for the companies involved.

Employees may have concerns too, imagining AI as some sort of Big Brother, or even as a job-stealing replacement, causing unease, poor performance and staff losses.

Secondly, the foundations aren't there for most companies. Almost all UK businesses operate with huge swathes of raw, unprocessed data, kept in silos and collected without format or purpose. Understanding the value of this data takes time and money to analyse, and without knowing that value going in, it can be hard to justify the required budget.

Likewise, staff need to be trained to use AI software and to understand why and how to collect the data it needs. As the old saying goes, Garbage In, Garbage Out: the quality of an AI's output depends on the quality of its input data and on how the system is set up and used.

Finally, there is the issue of AI black boxes. Users often don't understand why their AI tool is suggesting what it's suggesting; they have no idea how it came to that conclusion. This obviously impacts trust, and it raises the stakes all the more because users could be operating a tool without realising it is broken until they receive external complaints.

Black boxes also compound issues of bias. The AI industry is predominantly white and male, and if an AI tool is created by white males and tested on other white males, bias against other groups can slip in entirely by accident. This is difficult to discover when the AI's decision-making happens behind the curtain, and it will only become evident in the public sphere, causing embarrassment and reputational damage to the companies involved.

To tackle these issues, the AI industry, its customers, government and academia need to work together. 

Businesses will require more government support to prepare for AI adoption. Resources like the KTN AI Toolkit have proved incredibly successful in encouraging organisations, including NHS Trusts, government departments, councils and other public bodies, to adopt AI safely and securely, and it is being developed further into an online interactive tool for AI consumers. But that is just the tip of the iceberg in terms of what could be offered.

Businesses themselves need to start embedding AI into their strategies and budgeting for it, with an understanding that AI is experimental and requires an iterative approach of continuous development and improvement, guided by their AI suppliers.

AI will also need contributions from the academic sector. Universities have a responsibility not only to conduct AI-related research and generate AI talent, but also to train decision-makers in companies and organisations to understand AI, because those decision-makers' lack of understanding is one of the biggest blockers to adoption.

And finally, AI companies need to make sure they are hiring diversely and creating and testing their software with diversity consciously in mind.

When I went to the Robotics & AI Industry Showcase, I was struck by how incredible the UK AI scene is. With a proactive and collaborative approach, we can help UK industry make use of our cutting-edge technology and contribute a great deal to the economy. All we need to do is work together! To attend the next Robotics & AI Industry Showcase, visit www.raishowcase.com.
